Validation Documentation

Table of Contents

1. Unit Tests
2. Integration Tests
3. System Tests
4. Model Checking
5. Risk Management

1.1 Branch Coverage Analysis

This section identifies every function with branching logic in the elevator system and designs test cases targeting 100% branch coverage.
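To make the coverage goal concrete: branch coverage requires that every outcome of every decision is exercised, which is stricter than statement coverage. The sketch below uses a hypothetical helper (not from the codebase) whose two `if` statements create three paths, so three test cases are needed even though a single call would already execute every statement once.

```python
def clamp_floor(floor: int, min_floor: int = -1, max_floor: int = 3) -> int:
    """Clamp a requested floor into the served range (hypothetical helper)."""
    if floor < min_floor:   # branch outcome 1: below range
        return min_floor
    if floor > max_floor:   # branch outcome 2: above range
        return max_floor
    return floor            # branch outcome 3: in range

def test_clamp_below():
    assert clamp_floor(-5) == -1

def test_clamp_above():
    assert clamp_floor(10) == 3

def test_clamp_in_range():
    assert clamp_floor(2) == 2
```

Running any one of these tests alone gives 100% statement coverage of `clamp_floor` but only partial branch coverage; all three together cover every branch outcome.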

1.1.1 Dispatcher Class (dispatcher.py)

Function: add_call(self, floor: int, direction: str)

Branches Identified:

Test Cases:

Function: _process_pending_calls(self)

Branches Identified:

Test Cases:

Function: assign_task(self, elevator_idx: int, floor: int, call_id: Optional[str])

Branches Identified:

Test Cases:

Function: _optimize_task_queue(self, elevator: "Elevator")

Branches Identified:

Test Cases:

Function: _can_elevator_serve_call(self, elevator: "Elevator", floor: int, direction: Optional[MoveDirection])

Branches Identified:

Test Cases:

1.1.2 Elevator Class (elevator.py)

Function: update(self)

Branches Identified:

Test Cases:

Function: _determine_direction(self)

Branches Identified:

Test Cases:

Function: calculate_estimated_time(self, floor: int, direction: Optional[MoveDirection])

Branches Identified:

Test Cases:

Function: open_door(self) and close_door(self)

Branches Identified:

Test Cases:

1.1.3 ElevatorAPI Class (api/core.py)

Function: _parse_and_execute(self, command: str)

Branches Identified:

Test Cases:

Function: _handle_call_elevator(self, floor: int, direction: str)

Branches Identified:

Test Cases:

Function: _handle_select_floor(self, floor: int, elevator_id: int)

Branches Identified:

Test Cases:

Function: _handle_open_door(self, elevator_id: int) and _handle_close_door(self, elevator_id: int)

Branches Identified:

Test Cases:

Function: fetch_states(self)

Branches Identified:

Test Cases:

1.1.4 Models (models.py)

Function: Call.assign_to_elevator(self, elevator_idx: int)

Branches Identified:

Test Cases:

Function: Call.complete(self)

Branches Identified:

Test Cases:

Function: Call.is_pending(self) -> bool

Branches Identified:

Test Cases:

Function: Call.is_assigned(self) -> bool

Branches Identified:

Test Cases:

Function: Call.is_completed(self) -> bool

Branches Identified:

Test Cases:
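The `Call` methods listed above form a small state machine (pending → assigned → completed), which suggests one lifecycle test exercising all predicates. The sketch below is a stand-in: the method names come from this section, but the constructor arguments, the `CallState` enum, and the field names are assumptions, not the actual models.py implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class CallState(Enum):
    PENDING = auto()
    ASSIGNED = auto()
    COMPLETED = auto()

@dataclass
class Call:
    floor: int
    direction: str
    state: CallState = CallState.PENDING
    elevator_idx: Optional[int] = None

    def assign_to_elevator(self, elevator_idx: int) -> None:
        self.elevator_idx = elevator_idx
        self.state = CallState.ASSIGNED

    def complete(self) -> None:
        self.state = CallState.COMPLETED

    def is_pending(self) -> bool:
        return self.state is CallState.PENDING

    def is_assigned(self) -> bool:
        return self.state is CallState.ASSIGNED

    def is_completed(self) -> bool:
        return self.state is CallState.COMPLETED

def test_call_lifecycle():
    call = Call(floor=2, direction="up")
    assert call.is_pending()
    call.assign_to_elevator(0)
    assert call.is_assigned() and not call.is_pending()
    call.complete()
    assert call.is_completed() and not call.is_assigned()
```

A single walk through the lifecycle covers the true branch of each predicate at the matching state and the false branch at the neighboring states.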

Function: Task.is_outside_call(self) -> bool

Branches Identified:

Test Cases:

Function: validate_floor(floor: int) -> bool

Branches Identified:

Test Cases:

Function: validate_elevator_id(elevator_id: int) -> bool

Branches Identified:

Test Cases:

Function: validate_direction(direction: str) -> bool

Branches Identified:

Test Cases:
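Each validator above has exactly one true and one false branch, so two cases per function suffice. The sketch below assumes the bounds from the system configuration shown later in this document (floors -1 to 3, two elevators, up/down directions); the real validators in models.py may encode these differently.

```python
# Assumed system bounds; the real constants may live elsewhere in models.py.
FLOORS = [-1, 1, 2, 3]
ELEVATOR_IDS = [1, 2]
DIRECTIONS = {"up", "down"}

def validate_floor(floor: int) -> bool:
    return floor in FLOORS

def validate_elevator_id(elevator_id: int) -> bool:
    return elevator_id in ELEVATOR_IDS

def validate_direction(direction: str) -> bool:
    return direction in DIRECTIONS

def test_validators_both_branches():
    # Floor 0 is deliberately absent from the served floors.
    assert validate_floor(2) and not validate_floor(0)
    assert validate_elevator_id(1) and not validate_elevator_id(3)
    assert validate_direction("up") and not validate_direction("sideways")
```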

1.1.5 Simulator Class (simulator.py)

Function: set_api_and_initialize_components(self, api: "ElevatorAPI")

Branches Identified:

Test Cases:

Function: update(self)

Branches Identified:

Test Cases:

1.2 Branch Coverage Results (Actuals from Pytest)

The following branch coverage results were obtained from running python -m pytest src/test/units/ --cov=src/backend --cov-branch --cov-report=term-missing on June 8, 2025.

Core Component Branch Coverage:

Overall Branch Coverage for Core Components:

Detailed Coverage Report:

Name                        Stmts   Miss Branch BrPart  Cover   Missing
-----------------------------------------------------------------------
src\backend\__init__.py         6      0      0      0   100%
src\backend\api\core.py       222     79     76      8    64%   22, 51, 89, 108, 125, 141, 149, 282, 294-301, 305-319, 329-333, 343-351, 355-356, 360-361, 365-380, 384-401, 405-415, 419-429
src\backend\api\zmq.py        125    101     24      0    16%   19-38, 42-56, 60-63, 67, 71-74, 78, 82-83, 87-90, 94-105, 109-164, 168, 172-184, 188-189
src\backend\dispatcher.py     136     33     76      5    71%   8-9, 57->27, 71, 77-79, 92->97, 174, 178-179, 188-225
src\backend\elevator.py       238     19    122     25    87%   8-9, 74, 102->exit, 109->exit, 117, 121->exit, 137->exit, 164->171, 168-169, 180->exit, 184->exit, 191->exit, 209->exit, 294, 296, 359-360, 367-372, 401->434, 413->434, 420-431, 436->494, 450-456, 459->491, 470->491, 477-488
src\backend\models.py          61      4      0      0    93%   53, 78-79, 96
src\backend\simulator.py       28      4      8      1    86%   10, 51-53
src\backend\utility.py         24     17     10      1    24%   6->10, 16-34, 48-59
-----------------------------------------------------------------------
TOTAL                         840    257    316     40    67%

1.3 Test Implementation Strategy

1.3.1 Test File Structure

src/test/units/
├── test_dispatcher.py      # TC1-TC26
├── test_elevator.py        # TC27-TC61  
├── test_api_core.py        # TC62-TC92
├── test_models.py          # TC93-TC111, Model validation tests
├── test_simulator.py       # TC112-TC115
└── conftest.py            # Shared fixtures and utilities
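A possible shape for the shared helpers behind conftest.py is sketched below. Only the file's role (shared fixtures and utilities) comes from the structure above; the factory names, mock attributes, and default values are assumptions. In conftest.py itself, each factory would simply be wrapped with `@pytest.fixture`.

```python
from unittest.mock import MagicMock

def make_mock_api() -> MagicMock:
    """Stand-in for ElevatorAPI so units can be exercised in isolation."""
    api = MagicMock(name="ElevatorAPI")
    # Hypothetical canned response; the real fetch_states payload is richer.
    api.fetch_states.return_value = {"elevators": []}
    return api

def make_elevator(api: MagicMock) -> MagicMock:
    """Stand-in elevator pre-wired to the mock API, starting at floor 1."""
    elevator = MagicMock(name="Elevator")
    elevator.api = api
    elevator.current_floor = 1
    return elevator
```

Keeping the factories separate from the fixture decorators lets the same builders be reused by the integration suite without importing pytest.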

1.3.2 Test Categories

State Transition Tests:

Boundary Value Tests:

Error Handling Tests:

Concurrency Tests:

1.3.3 Mock Objects and Fixtures

Required Mocks:

Test Fixtures:

1.3.4 Assertion Strategy

Branch Coverage Assertions:

Functional Assertions:

1.4 Test Execution Requirements

1.4.1 Prerequisites

1.4.2 Execution Commands

# Run all unit tests with coverage
pytest src/test/units/ --cov=src/backend --cov-branch --cov-report=html

# Run specific test categories
pytest src/test/units/test_dispatcher.py -v
pytest src/test/units/test_elevator.py -v  
pytest src/test/units/test_api_core.py -v
pytest src/test/units/test_models.py -v
pytest src/test/units/test_simulator.py -v

# Generate coverage report
coverage report --show-missing --skip-covered

1.4.3 Success Criteria

1.5 Maintenance and Updates

1.5.1 Test Consistency

1.5.2 Documentation Updates

1.5.3 Regression Prevention


Note: This unit test documentation provides the foundation for implementing comprehensive test coverage. The actual test code should be developed following this specification to ensure complete branch coverage and system reliability.

2. Integration Tests

Integration tests validate the interactions between major system components, ensuring data flows correctly and component interfaces work together seamlessly.

2.1 Component Interaction Identification

Critical Integration Points:

  1. Dispatcher ↔ Elevator Coordination

  2. API ↔ ZMQ Client Communication

  3. Backend ↔ Frontend State Sync

  4. Engine ↔ Elevator Movement Control

  5. Cross-Component State Consistency

2.2 Test Coverage Items (Equivalence Partitioning)

Input Validation Categories:

System State Categories:

Communication Protocol Categories:

2.3 Test Cases Design

Multi-Component Workflow Tests:

IT1: Call → Dispatch → Movement → Arrival → Door Open

IT2: Multiple Simultaneous Calls to Different Floors

IT3: Floor Selection While Elevator Moving

IT4: Door Control During Movement Requests

IT5: System Reset with Active Operations

IT6: WebSocket Connection Loss and Recovery

IT7: ZMQ Client Disconnect and Reconnect

IT8: Concurrent Frontend and ZMQ Commands
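The end-to-end flow named by IT1 (call → dispatch → movement → arrival → door open) can be sketched with toy stand-ins, as below. The real test would drive the actual Dispatcher and Elevator classes through the simulator loop; here the classes, one-floor-per-update movement, and the iteration count are all simplifying assumptions.

```python
class ToyElevator:
    """Minimal cab: moves one floor per update, opens its door on arrival."""
    def __init__(self):
        self.floor = 1
        self.target = None
        self.door_open = False

    def update(self):
        if self.target is None:
            return
        if self.floor < self.target:
            self.floor += 1
        elif self.floor > self.target:
            self.floor -= 1
        else:  # arrived at the called floor
            self.target = None
            self.door_open = True

class ToyDispatcher:
    """Minimal dispatcher: assigns every call to its single elevator."""
    def __init__(self, elevator):
        self.elevator = elevator

    def add_call(self, floor, direction):
        self.elevator.target = floor

def test_it1_call_to_door_open():
    elevator = ToyElevator()
    dispatcher = ToyDispatcher(elevator)
    dispatcher.add_call(3, "up")
    for _ in range(5):          # drive the simulation loop to completion
        elevator.update()
    assert elevator.floor == 3 and elevator.door_open
```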

State Synchronization Tests:

IT9: Elevator State Propagation

IT10: Multi-Elevator Coordination

2.4 Test Implementation

Test File Structure:

tests/integration/
├── test_component_workflows.py      # IT1-IT5
├── test_communication_protocols.py  # IT6-IT8  
├── test_state_synchronization.py    # IT9-IT10
├── test_error_scenarios.py          # Error condition integration
└── fixtures/
    ├── mock_zmq_client.py
    ├── mock_websocket.py
    └── integration_helpers.py

Test Categories:

Mock Objects Required:

Execution Commands:

# Run all integration tests
python -m pytest tests/integration/ -v

# Run specific test categories
python -m pytest tests/integration/test_component_workflows.py -v
python -m pytest tests/integration/test_communication_protocols.py -v
python -m pytest tests/integration/test_state_synchronization.py -v

# Run with coverage
python -m pytest tests/integration/ --cov=src/backend --cov-report=html

Coverage Metrics:

Traceability Matrix:

Test Case  Components                             Integration Points  Coverage Items
IT1        Dispatcher, Elevator, API, Engine      4                   Call processing, Movement control, Notifications
IT2        Dispatcher, Elevators, API             3                   Multi-elevator coordination, Load balancing
IT3        Elevator, Dispatcher, API              3                   Dynamic queue management, Movement control
IT4        Elevator, API                          2                   Door control, Movement scheduling
IT5        All Components                         6                   System reset, State management
IT6        WebSocketBridge, Frontend, API         3                   Connection management, State sync
IT7        ZmqClientThread, API                   2                   Message handling, Connection recovery
IT8        API, WebSocketBridge, ZmqClientThread  3                   Concurrent processing, Command handling
IT9        Elevator, API, Bridges                 4                   State propagation, Interface consistency
IT10       Dispatcher, Elevators, API             4                   Multi-elevator coordination, Optimization

Maintenance Procedures:

  1. Monthly Integration Review: Verify integration points remain valid
  2. Component Change Impact: Update integration tests when components modified
  3. Performance Baseline: Monitor integration test execution times
  4. Mock Synchronization: Keep mocks aligned with real component interfaces
  5. Coverage Validation: Ensure new integration points added to test suite

3. System Tests

System tests validate end-to-end functionality from a user perspective, covering both common usage patterns and rare edge cases that might occur in real-world deployment.

3.1 Common Workflows

Standard User Operations:

ST1: Basic Call and Ride

ST2: Multi-Floor Journey with Stops

ST3: Door Control Operations

ST4: System Reset Functionality

3.2 Rare Workflows and Edge Cases

Complex Operational Scenarios:

ST5: Elevator Overload Simulation

ST6: Rapid Sequential Commands

ST7: Simultaneous Multi-Interface Commands

ST8: Long-Duration Operation

Error Recovery Scenarios:

ST9: Network Disconnection Recovery

ST10: Invalid Command Sequences

Performance Edge Cases:

ST11: Maximum Floor Distance Travel

ST12: Minimum Response Time Scenario

3.3 Test Implementation

Test Environment Setup:

# System test configuration
SYSTEM_TEST_CONFIG = {
    'elevators': 2,
    'floors': ['-1', '1', '2', '3'],
    'door_timeout': 3.0,
    'floor_travel_time': 2.0,
    'door_operation_time': 1.0
}

Test File Structure:

tests/system/
├── test_common_workflows.py         # ST1-ST4
├── test_rare_scenarios.py           # ST5-ST8  
├── test_error_recovery.py           # ST9-ST10
├── test_performance_edge_cases.py   # ST11-ST12
├── load_testing/
│   ├── test_sustained_load.py
│   ├── test_peak_load.py
│   └── performance_metrics.py
└── fixtures/
    ├── system_test_harness.py
    ├── user_simulation.py
    └── performance_monitor.py
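One way user_simulation.py could work is as a deterministic script of timed user actions that a system test replays against the API. Everything below is an assumption: the command strings, the script format, and the helper name are illustrative only.

```python
from typing import List, Tuple

# Each entry: (time offset in seconds, hypothetical command string).
Script = List[Tuple[float, str]]

# Hypothetical replay script for ST1: Basic Call and Ride.
ST1_BASIC_CALL_AND_RIDE: Script = [
    (0.0, "call_up@1"),        # passenger calls upward from floor 1
    (4.0, "select_floor@3#1"), # once inside elevator 1, selects floor 3
]

def due_actions(script: Script, now: float) -> List[str]:
    """Return the commands whose scheduled time has passed."""
    return [cmd for t, cmd in script if t <= now]
```

A test harness would poll `due_actions` each simulation tick and forward the returned commands to the API, keeping the workflow reproducible across runs.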

Automated Test Execution:

# Run all system tests
python -m pytest tests/system/ -v --tb=short

# Run specific workflow categories
python -m pytest tests/system/test_common_workflows.py -v
python -m pytest tests/system/test_rare_scenarios.py -v
python -m pytest tests/system/test_error_recovery.py -v

# Run performance tests with timing
python -m pytest tests/system/test_performance_edge_cases.py -v --durations=10

# Run load tests (extended duration)
python -m pytest tests/system/load_testing/ -v --timeout=3600

Test Coverage Metrics:

Acceptance Criteria Validation:

Test Case  Response Time     Success Rate     Error Handling       Performance
ST1        <15s total        100%             N/A                  Normal load
ST2        <20s total        100%             N/A                  Multi-user
ST3        <8s override      100%             Graceful             Manual control
ST4        <5s reset         100%             Complete recovery    System reset
ST5        <30s max wait     100% eventually  Queue management     High load
ST6        <3s response      100% filtered    Duplicate handling   Rapid input
ST7        <2s response      100%             Conflict resolution  Multi-interface
ST8        Consistent        100%             Stable operation     Long duration
ST9        <2s resync        100%             Auto-recovery        Network issues
ST10       Immediate reject  100%             Clear errors         Invalid input
ST11       8s + stops        100%             N/A                  Max distance
ST12       <0.5s response    100%             N/A                  Min distance

Performance Benchmarks:

Test Maintenance:

  1. Weekly Regression: Run full system test suite
  2. Performance Baseline: Monitor timing benchmarks monthly
  3. Load Testing: Quarterly peak load validation
  4. User Acceptance: Annual end-user workflow validation
  5. Documentation Updates: Keep test scenarios aligned with requirements

4. Model Checking

Model checking is employed for the formal verification of critical system properties using state-space exploration. Our elevator system is modeled using UPPAAL, a tool for modeling, simulation, and verification of real-time systems. Due to the inherent complexity of the real-world elevator system, our UPPAAL model incorporates necessary abstractions and simplifications to make formal verification feasible.

4.1 System Model

The system model in UPPAAL consists of several interacting timed automata templates: Elevator, Dispatcher, Passenger, and an Initializer. These templates define the behavior of individual components and their interactions through channels and shared variables.

Key Abstractions and Simplifications:

These simplifications are justified as they allow for tractable analysis of core system properties like deadlock freedom and request satisfaction, while still capturing the essential concurrent and real-time behaviors.

Global Declarations: The model defines global constants for system parameters, states (e.g., STATE_IDLE, DOOR_CLOSED), and timing values. Key data structures include:

Templates:

  1. Elevator Template:

  2. Dispatcher Template:

  3. Initializer Template:

4.2 Environment Model

The environment, primarily user interactions, is modeled by the Passenger template.

Passenger Template:

4.3 Verification Queries

The following queries were used to verify properties of the UPPAAL model. The results were obtained with NUM_ELEVATORS = 2, NUM_PASSENGERS = 3, and specific passenger journeys (P0(0,1,3), P1(1,2,0), P2(2,2,3)).

  1. Query: E<> passenger_arrived[0] && passenger_arrived[1] && passenger_arrived[2]

  2. Query: A[] not deadlock

  3. Query: A[] (not E0.Moving or E0.door_state == DOOR_CLOSED) and (not E1.Moving or E1.door_state == DOOR_CLOSED)
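The deadlock-freedom property in the second query means that every reachable state has at least one enabled transition. The didactic sketch below illustrates this with an explicit-state search over a toy elevator abstraction; it is not the UPPAAL model itself, and the state encoding is an assumption made purely for illustration.

```python
from collections import deque

def successors(state):
    """Toy abstraction: state is (floor, door). The door can always toggle
    when the cab is stopped, and the cab moves only with the door closed."""
    floor, door = state
    if door == "open":
        return [(floor, "closed")]
    moves = [(floor, "open")]
    if floor > -1:
        moves.append((floor - 1, "closed"))
    if floor < 3:
        moves.append((floor + 1, "closed"))
    return moves

def deadlock_free(initial):
    """Breadth-first search: report False if any reachable state is stuck."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        nxt = successors(state)
        if not nxt:
            return False  # a reachable state with no moves: deadlock
        for s in nxt:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return True
```

UPPAAL performs the same reachable-state exploration, but over the product of the timed automata with clock zones rather than this finite toy graph.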

These verification results provide confidence in the correctness of the modeled system aspects concerning passenger arrival, deadlock freedom, and door safety during movement.

5. Risk Management

5.1 Risk Analysis

This section identifies potential risks in the elevator system, analyzing their frequency and severity to ensure safe and reliable operation.

3.1.1 Passenger Service Risks

Risk R1: Passenger Call Never Serviced (Starvation)

Risk R2: Elevator Never Arrives at Called Floor

Risk R3: Doors Never Open at Destination Floor

5.1.2 System Reliability Risks

Risk R4: Deadlock Between Elevators

Risk R5: Elevator Moving with Doors Open

5.1.3 Performance and Efficiency Risks

Risk R6: Suboptimal Elevator Dispatching

Risk R7: Task Queue Overflow

5.1.4 Communication and Interface Risks

Risk R8: Message Loss Between Components

Risk R9: Invalid User Input Handling

5.2 Risk Mitigation

This section describes how each identified risk has been mitigated in the system design and implementation.

5.2.1 Passenger Service Risk Mitigation

Mitigation for R1 (Passenger Call Never Serviced)

Mitigation for R2 (Elevator Never Arrives)

Mitigation for R3 (Doors Never Open)

5.2.2 System Reliability Risk Mitigation

Mitigation for R4 (Deadlock)

Mitigation for R5 (Moving with Open Doors)

5.2.3 Performance Risk Mitigation

Mitigation for R6 (Suboptimal Dispatching)

Mitigation for R7 (Task Queue Overflow)

5.2.4 Communication Risk Mitigation

Mitigation for R8 (Message Loss)

Mitigation for R9 (Invalid Input)
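A common shape for the R9 mitigation is to validate at the API boundary and reject with a clear error before any state changes occur, so invalid input never reaches the dispatcher or an elevator. The sketch below reuses the validator pattern from section 1.1.4; the response dictionary format and the handler name are assumptions.

```python
from typing import Any, Dict

# Assumed served floors; floor 0 is deliberately not in the building.
FLOORS = [-1, 1, 2, 3]

def validate_floor(floor: int) -> bool:
    return floor in FLOORS

def handle_select_floor(floor: Any) -> Dict[str, str]:
    """Boundary check: reject bad input before touching dispatcher state."""
    if not isinstance(floor, int) or not validate_floor(floor):
        return {"status": "error", "message": f"invalid floor: {floor!r}"}
    return {"status": "ok", "message": f"floor {floor} selected"}
```

Because rejection happens before dispatching, an invalid command can never corrupt a task queue or leave a call record half-created.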